64 research outputs found

    A Theory of Plans for Electronic Circuits

    Get PDF
    This report describes research done at the Artificial Intelligence Laboratory of the Massachusetts Institute of Technology. Support for the Laboratory's artificial intelligence research is provided in part by the Advanced Research Projects Agency of the Department of Defense under Office of Naval Research contract N00014-75-C-0643. A plan for a device assigns purposes to each of the more primitive components and explains how these components interact to achieve the desired behavior of the composite device. Such an information structure is critically important in analyzing, designing, or troubleshooting devices. The first goal of this research is to develop a theory of plans for electronic circuits that can be used for these purposes. The second goal is the construction of a system that can automatically recognize a plan for a circuit from a geometrical representation of the circuit's schematic diagram. Recognition is a process that recaptures the plan the designer originally had in mind. A theory of schemata will be introduced in which recognition is viewed as the identification of an instance of a schema in the library with the particular circuit being recognized. This process is guided by topological and geometric evidence extracted from the circuit schematic. Causal reasoning, using the technique of propagation of constraints, provides further evidence. One important use of causal reasoning is the confirmation of tentative instantiations based on topological and geometric evidence alone.
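
    The propagation-of-constraints style of causal reasoning mentioned in the abstract can be illustrated with a small sketch. The constraint set, the variable names, and the series-circuit example below are assumptions chosen for illustration; they are not taken from the report itself.

        # Illustrative sketch only: a minimal propagation-of-constraints pass,
        # applied to a two-resistor series circuit. Each rule deduces new
        # values from values already known; propagation repeats to a fixpoint.

        def propagate(values, constraints):
            """Repeatedly apply local constraints until no new value can be deduced."""
            changed = True
            while changed:
                changed = False
                for rule in constraints:
                    for var, val in rule(values).items():
                        if var not in values:
                            values[var] = val
                            changed = True
            return values

        # Series circuit: source V drives R1 and R2; the same current I flows through both.
        def ohm_r1(v):   # V1 = I * R1
            return {"V1": v["I"] * v["R1"]} if "I" in v and "R1" in v else {}

        def ohm_r2(v):   # V2 = I * R2
            return {"V2": v["I"] * v["R2"]} if "I" in v and "R2" in v else {}

        def kvl(v):      # V = V1 + V2, used here to deduce the loop current
            if {"V", "R1", "R2"} <= v.keys():
                return {"I": v["V"] / (v["R1"] + v["R2"])}
            return {}

        known = {"V": 9.0, "R1": 1000.0, "R2": 2000.0}
        print(propagate(known, [kvl, ohm_r1, ohm_r2]))
        # -> I = 0.003 A, V1 = 3.0 V, V2 = 6.0 V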

    2D Density Control of Micro-Particles using Kernel Density Estimation

    Full text link
    We address the problem of 2D particle density control. The particles are immersed in dielectric fluid and acted upon by manipulating an electric field. The electric field is controlled by an array of electrodes and used to bring the particle density to a desired pattern using dielectrophoretic forces. We use a lumped, 2D, capacitive-based, nonlinear model describing the motion of a particle. The spatial dependency of the capacitances is estimated using electrostatic COMSOL simulations. We formulate an optimal control problem, where the loss function is defined in terms of the error between the particle density at some final time and a target density. We use a kernel density estimator (KDE) as a proxy for the true particle density. The KDE is computed using the particle positions that are changed by varying the electrode potentials. We showcase our approach through numerical simulations, where we demonstrate how the particle positions and the electrode potentials vary when shaping the particle positions from a uniform to a Gaussian distribution.
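
    The KDE-as-proxy idea can be sketched concretely. The bandwidth, grid resolution, and target distribution below are assumptions for demonstration; this is not the paper's code.

        # Minimal sketch: a Gaussian kernel density estimate over 2D particle
        # positions and a squared-error loss against a target density on a grid.
        import numpy as np

        def kde(positions, grid, bandwidth=0.1):
            """Gaussian KDE of particle density evaluated at grid points (G x 2)."""
            diff = grid[:, None, :] - positions[None, :, :]    # (G, P, 2)
            sq_dist = np.sum(diff ** 2, axis=-1)               # (G, P)
            kernel = np.exp(-sq_dist / (2 * bandwidth ** 2))
            kernel /= 2 * np.pi * bandwidth ** 2 * positions.shape[0]
            return kernel.sum(axis=1)                          # (G,)

        def density_loss(positions, grid, target_density):
            """Error between the KDE proxy and the desired pattern."""
            return np.mean((kde(positions, grid) - target_density) ** 2)

        # Example: 200 particles, loss against a Gaussian target centred in the unit square.
        rng = np.random.default_rng(0)
        particles = rng.uniform(0.0, 1.0, size=(200, 2))
        xs, ys = np.meshgrid(np.linspace(0, 1, 20), np.linspace(0, 1, 20))
        grid = np.stack([xs.ravel(), ys.ravel()], axis=1)
        target = np.exp(-np.sum((grid - 0.5) ** 2, axis=1) / (2 * 0.15 ** 2))
        target /= target.sum() * (1.0 / 400)   # rough normalisation over the unit square
        print(density_loss(particles, grid, target))

    In the paper's setting the particle positions are themselves functions of the electrode potentials, so a loss of this form is what the optimizer would differentiate through when choosing the potentials.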

    AI Enhanced Control Engineering Methods

    Full text link
    AI- and machine-learning-based approaches are becoming ubiquitous in almost all engineering fields. Control engineering cannot escape this trend. In this paper, we explore how AI tools can be useful in control applications. The core tool we focus on is automatic differentiation. Two immediate applications are linearization of system dynamics for local stability analysis or for state estimation using Kalman filters. We also explore other uses, such as the conversion of differential-algebraic equations to ordinary differential equations for control design. In addition, we explore the use of machine learning models for global parameterizations of state vectors and control inputs in model predictive control applications. For each considered use case, we give examples and results.
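
    The linearization-via-automatic-differentiation use case can be illustrated with a small forward-mode sketch. The dual-number class and the example dynamics function are assumptions for demonstration, not the paper's implementation (which could equally use an autodiff library).

        # Illustrative sketch: forward-mode automatic differentiation with dual
        # numbers, used to linearize a scalar nonlinear dynamics f(x) around a point.
        import math

        class Dual:
            """A value together with its derivative, propagated through arithmetic."""
            def __init__(self, val, der=0.0):
                self.val, self.der = val, der
            def __add__(self, other):
                other = other if isinstance(other, Dual) else Dual(other)
                return Dual(self.val + other.val, self.der + other.der)
            __radd__ = __add__
            def __mul__(self, other):
                other = other if isinstance(other, Dual) else Dual(other)
                return Dual(self.val * other.val,
                            self.der * other.val + self.val * other.der)
            __rmul__ = __mul__

        def dsin(x):
            return Dual(math.sin(x.val), math.cos(x.val) * x.der)

        def f(x):
            # Example nonlinear dynamics: x_dot = -2*x + sin(x)
            return -2.0 * x + dsin(x)

        def linearize(f, x0):
            """Return (f(x0), df/dx at x0) by seeding the derivative with 1."""
            out = f(Dual(x0, 1.0))
            return out.val, out.der

        # The Jacobian at the operating point feeds a local stability check or an EKF.
        print(linearize(f, 0.5))   # (-0.5206..., -1.1224...)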

    An optimization-based approach to automated design

    Full text link
    We propose a model-based, automated, bottom-up approach for design, which is applicable to various physical domains, but in this work we focus on the electrical domain. This bottom-up approach is based on a meta-topology in which each link is described by a universal component that can be instantiated as basic components (e.g., resistors, capacitors) or combinations of basic components via discrete switches. To address the combinatorial explosion often present in mixed-integer optimization problems, we present two algorithms. In the first algorithm, we convert the discrete switches into continuous switches that are physically realizable and formulate a parameter optimization problem that learns the component and switch parameters while inducing design sparsity through an L1 regularization term. The second algorithm uses a genetic-like approach with selection and mutation steps guided by ranking of requirements costs, combined with continuous optimization for generating optimal parameters. We improve the time complexity of the optimization problem in both algorithms by reconstructing the model when components become redundant and by simplifying topologies through collapsing components and removing disconnected ones. To demonstrate the efficacy of these algorithms, we apply them to the design of various electrical circuits.
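
    The sparsity-inducing L1 term can be illustrated with a generic sketch. The proximal-gradient solver, toy data, and penalty weight below are assumptions; the point is only that an L1 penalty drives some parameters exactly to zero, which is the mechanism used to prune redundant components and switches.

        # Minimal sketch: L1-regularized least squares solved by proximal
        # gradient descent (ISTA). Most coefficients end up exactly zero.
        import numpy as np

        def soft_threshold(w, t):
            """Proximal operator of t * ||w||_1."""
            return np.sign(w) * np.maximum(np.abs(w) - t, 0.0)

        def ista(A, b, lam, steps=500):
            """Minimize 0.5*||A w - b||^2 + lam*||w||_1."""
            step = 1.0 / np.linalg.norm(A, 2) ** 2   # 1 / Lipschitz constant of the gradient
            w = np.zeros(A.shape[1])
            for _ in range(steps):
                grad = A.T @ (A @ w - b)
                w = soft_threshold(w - step * grad, lam * step)
            return w

        rng = np.random.default_rng(0)
        A = rng.normal(size=(40, 10))
        w_true = np.zeros(10)
        w_true[[1, 4]] = [2.0, -1.5]                  # only two "components" truly active
        b = A @ w_true + 0.01 * rng.normal(size=40)
        print(np.round(ista(A, b, lam=0.5), 3))       # most entries are exactly 0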

    Diagnosing multiple faults.

    Get PDF
    Diagnostic tasks require determining the differences between a model of an artifact and the artifact itself. The differences between the manifested behavior of the artifact and the predicted behavior of the model guide the search for the differences between the artifact and its model. The diagnostic procedure presented in this paper is model-based, inferring the behavior of the composite device from knowledge of the structure and function of the individual components comprising the device. The system, GDE (General Diagnostic Engine), has been implemented and tested on many examples in the domain of troubleshooting digital circuits. This research makes several novel contributions: First, the system diagnoses failures due to multiple faults. Second, failure candidates are represented and manipulated in terms of minimal sets of violated assumptions, resulting in an efficient diagnostic procedure. Third, the diagnostic procedure is incremental, exploiting the iterative nature of diagnosis. Fourth, a clear separation is drawn between diagnosis and behavior prediction, resulting in a domain- (and inference-procedure-) independent diagnostic procedure. Fifth, GDE combines model-based prediction with sequential diagnosis to propose measurements to localize the faults. The normally required conditional probabilities are computed from the structure of the device and models of its components. This capability results from a novel way of incorporating probabilities and information theory into the context mechanism provided by Assumption-Based Truth Maintenance.
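
    The "minimal sets of violated assumptions" representation can be sketched as computing minimal hitting sets of conflict sets. The conflicts and component names below are assumed for illustration and are not taken from the paper.

        # Illustrative sketch: multiple-fault diagnosis candidates as minimal
        # hitting sets of conflicts (sets of components that cannot all be
        # working, given the observations).
        from itertools import combinations

        def minimal_hitting_sets(conflicts):
            """Enumerate minimal sets of components that intersect every conflict."""
            components = sorted(set().union(*conflicts))
            found = []
            for size in range(1, len(components) + 1):
                for cand in combinations(components, size):
                    cand = set(cand)
                    if all(cand & c for c in conflicts) and \
                       not any(h <= cand for h in found):
                        found.append(cand)
            return found

        # Two conflicts over a small gate-level circuit (component names assumed).
        conflicts = [{"A1", "A2", "M1"}, {"A1", "M2", "M3"}]
        for diagnosis in minimal_hitting_sets(conflicts):
            print(sorted(diagnosis))
        # ['A1'] is a single-fault candidate; {'A2','M2'} etc. are double-fault candidates.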

    AMORD: A Deductive Procedure System

    Get PDF
    This research was conducted at the Artificial Intelligence Laboratory of the Massachusetts Institute of Technology. Support for the Laboratory's artificial intelligence research is provided in part by the Advanced Research Projects Agency of the Department of Defense under Office of Naval Research contract number N00014-75-C-0643. We have implemented an interpreter for a rule-based system, AMORD, based on a non-chronological control structure and a system of automatically maintained data-dependencies. The purpose of this paper is tutorial. We wish to illustrate: (1) the discipline of explicit control and dependencies, (2) how to use AMORD, and (3) one way to implement the mechanisms provided by AMORD. This paper is organized into sections. The first section is a short "reference manual" describing the major features of AMORD. Next, we present some examples which illustrate the style of expression encouraged by AMORD. This style makes control information explicit in a rule-manipulable form, and depends on an understanding of the use of non-chronological justifications for program beliefs as a means for determining the current set of beliefs. The third section is a brief description of the Truth Maintenance System employed by AMORD for maintaining these justifications and program beliefs. The fourth section presents a completely annotated interpreter for AMORD, written in SCHEME.
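
    The idea of non-chronological justifications determining the current belief set can be sketched in miniature. The facts, justification structure, and recomputation scheme below are assumptions chosen for illustration, not AMORD's Truth Maintenance System.

        # Minimal sketch: a belief is "in" iff some justification has all of
        # its antecedents in; the belief set is recomputed from justifications,
        # not from the order in which facts were asserted.
        def current_beliefs(premises, justifications):
            """justifications: dict mapping a fact to a list of antecedent sets."""
            beliefs = set(premises)
            changed = True
            while changed:
                changed = False
                for fact, ante_sets in justifications.items():
                    if fact not in beliefs and any(s <= beliefs for s in ante_sets):
                        beliefs.add(fact)
                        changed = True
            return beliefs

        justifications = {
            "wet-ground": [{"rain"}, {"sprinkler-on"}],
            "slippery":   [{"wet-ground"}],
        }
        print(current_beliefs({"rain"}, justifications))   # includes 'slippery'
        print(current_beliefs(set(), justifications))      # empty: support withdrawn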

    Readiness of Quantum Optimization Machines for Industrial Applications

    Full text link
    There have been multiple attempts to demonstrate that quantum annealing, and in particular quantum annealing run on dedicated annealing machines, has the potential to outperform current classical optimization algorithms implemented on CMOS technologies. The benchmarking of these devices has been controversial. Initially, random spin-glass problems were used; however, these were quickly shown to be not well suited to detect any quantum speedup. Subsequently, benchmarking shifted to carefully crafted synthetic problems designed to highlight the quantum nature of the hardware while (often) ensuring that classical optimization techniques do not perform well on them. Even worse, to date a true sign of improved scaling with the number of problem variables remains elusive when compared to classical optimization techniques. Here, we analyze the readiness of quantum annealing machines for real-world application problems. These are typically not random and have an underlying structure that is hard to capture in synthetic benchmarks, thus posing unexpected challenges for optimization techniques, both classical and quantum alike. We present a comprehensive computational scaling analysis of fault diagnosis in digital circuits, considering architectures beyond D-Wave quantum annealers. We find that the instances generated from real data in multiplier circuits are harder than other representative random spin-glass benchmarks with a comparable number of variables. Although our results show that transverse-field quantum annealing is outperformed by state-of-the-art classical optimization algorithms, these benchmark instances are hard and small in the size of the input, therefore representing the first industrial application ideally suited for testing near-term quantum annealers and other quantum algorithmic strategies for optimization problems. Comment: 22 pages, 12 figures. Content updated according to Phys. Rev. Applied version.
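
    The flavor of casting circuit fault diagnosis as an energy-minimization problem (the kind of instance an annealer would receive) can be sketched very roughly. The gate, the penalty weights, and the "healthy" variable below are assumptions for illustration, not the paper's actual encoding.

        # Illustrative sketch: a single AND gate with a binary "healthy" variable h.
        # If h = 1 the gate must satisfy z = x AND y; if h = 0 (faulted) the
        # constraint is waived but a unit cost is paid, so minimum-energy states
        # correspond to minimum-cardinality diagnoses.
        def energy(x, y, z, h):
            violated = int(z != (x and y))       # gate-consistency check
            return h * violated * 10 + (1 - h)   # large penalty if healthy-but-violated

        # Observation: inputs x=1, y=1 but output z=0, inconsistent with a healthy gate.
        x, y, z = 1, 1, 0
        best = min((energy(x, y, z, h), h) for h in (0, 1))
        print(best)   # (1, 0): the lowest-energy assignment declares the gate faulty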